Coordinating a multi-retailer decentralized distribution system with random demand based on buyback and compensation contracts
Purpose: This paper sets up a coordination mechanism, by means of contracts, for a
decentralized distribution system consisting of a manufacturer and multiple independent
retailers. In this two-stage supply chain, all retailers sell an identical product made by the
manufacturer and determine their order quantities, which directly affect the expected profit
of the supply chain under random demand.
Design/methodology/approach: First, a comparison of the optimal order quantities in the
centralized and decentralized systems shows that the supply chain needs coordination. Then a
coordination model is formulated based on buyback cost and compensation benefit. Finally, a
coordination mechanism is set up in which the manufacturer, as the leader, uses a buyback
policy to incentivize the retailers, and the retailers pay profit returns to compensate the
manufacturer.
Findings: A numerical example shows that perfect supply chain coordination and flexible
allocation of profit can be achieved in the multi-retailer supply chain through the buyback
and compensation contracts.
Research limitations: The results rest on assumptions that might not completely hold in
practice, and the paper focuses only on a single product in a two-stage supply chain.
Practical implications: The coordination mechanism is applicable to a realistic supply chain
under a private-information setting, and the results lay the foundation for extending the
coordination mechanism to realistic multi-stage supply chain systems with more products.
Originality/value: This paper studies the coordination mechanism for a decentralized
multi-retailer supply chain through the joint application of buyback and compensation
contracts, achieving perfect supply chain coordination and flexible allocation of profit.
Peer Reviewed
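The coordinating role of the buyback price can be illustrated with a textbook single-retailer newsvendor sketch (a simplified stand-in for the paper's multi-retailer model; the prices, cost, and Uniform(0, D) demand below are hypothetical): without a buyback the retailer's critical ratio is below the centralized one, so it under-orders, and a suitably chosen buyback price restores the centralized order quantity.

```python
# Newsvendor illustration of buyback coordination (single retailer,
# demand ~ Uniform(0, D); all numbers below are hypothetical).
p, w, c, D = 10.0, 7.0, 4.0, 1000.0  # retail price, wholesale price, unit cost, demand bound

def order_qty(critical_ratio, demand_bound=D):
    # For Uniform(0, D) demand the optimal newsvendor order is q* = D * critical ratio.
    return demand_bound * critical_ratio

q_centralized = order_qty((p - c) / p)      # integrated chain: ratio (p - c) / p
q_decentralized = order_qty((p - w) / p)    # retailer alone under-orders: (p - w) / p
b = p * (w - c) / (p - c)                   # buyback price aligning the two ratios
q_buyback = order_qty((p - w) / (p - b))    # retailer's ratio under the buyback

print(q_centralized, q_decentralized, q_buyback)  # -> 600.0 300.0 600.0
```

Here b = p(w - c)/(p - c) is the standard single-retailer coordinating buyback price; the paper's contract additionally uses compensation payments to reallocate profit among multiple retailers.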
Target-Driven Structured Transformer Planner for Vision-Language Navigation
Vision-language navigation is the task of directing an embodied agent to
navigate in 3D scenes with natural language instructions. For the agent,
inferring the long-term navigation target from visual-linguistic clues is
crucial for reliable path planning, which, however, has rarely been studied
in the literature. In this article, we propose a Target-Driven Structured
Transformer Planner (TD-STP) for long-horizon goal-guided and room layout-aware
navigation. Specifically, we devise an Imaginary Scene Tokenization mechanism
for explicit estimation of the long-term target (even located in unexplored
environments). In addition, we design a Structured Transformer Planner which
elegantly incorporates the explored room layout into a neural attention
architecture for structured and global planning. Experimental results
demonstrate that our TD-STP substantially improves previous best methods'
success rate by 2% and 5% on the test set of R2R and REVERIE benchmarks,
respectively. Our code is available at https://github.com/YushengZhao/TD-STP
Soft Margin Estimation with Various Separation Levels for LVCSR
We extend our previous work on soft margin estimation (SME) to large vocabulary continuous speech recognition (LVCSR) in two new aspects. The first is to formulate SME with different units of separation: SME methods focusing on string-, word-, and phone-level separation are defined. The second is to compare SME with the popular conventional discriminative training (DT) methods, including maximum mutual information estimation (MMIE), minimum classification error (MCE), and minimum word/phone error (MWE/MPE). Tested on the 5k-word Wall Street Journal task, all the SME methods achieve a relative word error rate (WER) reduction of 17% to 25% over our baseline. Among them, phone-level SME obtains the best performance: slightly better than MPE, and much better than the other conventional DT methods. This comprehensive comparison with conventional DT methods demonstrates the success of SME on LVCSR tasks.
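The hinge-style penalty behind soft margin estimation can be sketched generically; this is the textbook soft-margin idea with hypothetical separation scores, not the paper's exact objective:

```python
def soft_margin_loss(separations, margin):
    """Average hinge penalty: a sample whose separation (score of the correct
    hypothesis minus the best competitor) falls below the margin is penalized
    in proportion to how far inside the margin it lies."""
    return sum(max(0.0, margin - d) for d in separations) / len(separations)

# Three hypothetical utterances: inside the margin, well separated, misclassified.
print(soft_margin_loss([0.5, 2.0, -1.0], margin=1.0))
```

Only the first and third utterances contribute to the loss; the well-separated one is ignored, which is what concentrates training on samples near the decision boundary.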
On decoder-only architecture for speech-to-text and large language model integration
Large language models (LLMs) have achieved remarkable success in the field of
natural language processing, enabling better human-computer interaction using
natural language. However, the seamless integration of speech signals into LLMs
has not been explored well. The "decoder-only" architecture has also not been
well studied for speech processing tasks. In this research, we introduce
Speech-LLaMA, a novel approach that effectively incorporates acoustic
information into text-based large language models. Our method leverages
Connectionist Temporal Classification and a simple audio encoder to map the
compressed acoustic features to the continuous semantic space of the LLM. In
addition, we further probe the decoder-only architecture for speech-to-text
tasks by training a smaller scale randomly initialized speech-LLaMA model from
speech-text paired data alone. We conduct experiments on multilingual
speech-to-text translation tasks and demonstrate a significant improvement over
strong baselines, highlighting the potential advantages of decoder-only models
for speech-to-text conversion.
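The CTC-based compression mentioned above can be illustrated with the standard greedy CTC collapse rule (merge consecutive repeats, then drop blanks); this shows generic CTC behavior only, not Speech-LLaMA's actual encoder:

```python
BLANK = "<b>"

def ctc_collapse(frame_labels):
    """Greedy CTC collapse: merge consecutive repeated labels, then remove
    blanks, turning a long frame-level sequence into a short token sequence."""
    out, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return out

print(ctc_collapse(["<b>", "h", "h", "<b>", "e", "e", "l", "<b>", "l", "o"]))
# -> ['h', 'e', 'l', 'l', 'o']
```

The blank between the two "l" frames is what lets CTC represent a genuinely repeated token.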
SpeechLM: Enhanced Speech Pre-Training with Unpaired Textual Data
How to boost speech pre-training with textual data is an unsolved problem due
to the fact that speech and text are very different modalities with distinct
characteristics. In this paper, we propose a cross-modal Speech and Language
Model (SpeechLM) to explicitly align speech and text pre-training with a
pre-defined unified discrete representation. Specifically, we introduce two
alternative discrete tokenizers to bridge the speech and text modalities,
including phoneme-unit and hidden-unit tokenizers, which can be trained using a
small amount of paired speech-text data. Based on the trained tokenizers, we
convert the unlabeled speech and text data into tokens of phoneme units or
hidden units. The pre-training objective is designed to unify the speech and
the text into the same discrete semantic space with a unified Transformer
network. Leveraging only 10K text sentences, our SpeechLM gets a 16% relative
WER reduction over the best base model performance (from 6.8 to 5.7) on the
public LibriSpeech ASR benchmark. Moreover, SpeechLM with fewer parameters even
outperforms previous SOTA models on CoVoST-2 speech translation tasks. We also
evaluate our SpeechLM on various spoken language processing tasks under the
universal representation evaluation framework SUPERB, demonstrating significant
improvements on content-related tasks. Our code and models are available at
https://aka.ms/SpeechLM.
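The 16% figure quoted above can be checked directly from the reported WERs:

```python
baseline, improved = 6.8, 5.7  # WERs reported on the LibriSpeech benchmark
relative_reduction = (baseline - improved) / baseline
print(f"{relative_reduction:.1%}")  # prints 16.2%, matching the ~16% relative reduction
```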
RNA sequencing reveals CircRNA expression profiles in chicken embryo fibroblasts infected with velogenic Newcastle disease virus
Introduction: Newcastle disease virus (NDV) is an important avian pathogen prevalent worldwide; it has an extensive host range and seriously harms the poultry industry. Velogenic NDV strains exhibit high pathogenicity and mortality in chickens. Circular RNAs (circRNAs) are among the most abundant and conserved eukaryotic transcripts and are part of the innate immunity and antiviral response. However, the relationship between circRNAs and NDV infection is unclear.
Methods: In this study, we used circRNA transcriptome sequencing to analyze differences in circRNA expression profiles after velogenic NDV infection of chicken embryo fibroblasts (CEFs). Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were used to characterize the differentially expressed (DE) circRNAs, and circRNA-miRNA-mRNA interaction networks were predicted. Moreover, circ-EZH2 was selected to determine its effect on NDV infection in CEFs.
Results: NDV infection altered circRNA expression profiles in CEFs, and 86 significantly DE circRNAs were identified. GO and KEGG enrichment analyses revealed significant enrichment of DE circRNAs in metabolism-related pathways, such as lysine degradation; glutamatergic synapse; and alanine, aspartate, and glutamate metabolism. The circRNA-miRNA-mRNA interaction networks further suggested that CEFs might combat NDV infection by regulating metabolism through circRNA-targeted mRNAs and miRNAs. Furthermore, we verified that circ-EZH2 overexpression and knockdown inhibited and promoted NDV replication, respectively, indicating that circRNAs are involved in NDV replication.
Conclusions: These results demonstrate that CEFs mount antiviral responses by forming circRNAs, offering new insights into the mechanisms underlying NDV-host interactions.
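GO/KEGG enrichment of a DE gene set is commonly scored with a one-sided hypergeometric test; the sketch below shows that generic statistic with toy numbers, not the authors' exact pipeline:

```python
from math import comb

def enrichment_pvalue(N, K, n, x):
    """P(X >= x) under a hypergeometric model: the chance of seeing at least
    x pathway genes among n differentially expressed genes drawn from N
    annotated genes, K of which belong to the pathway."""
    return sum(
        comb(K, k) * comb(N - K, n - k) for k in range(x, min(K, n) + 1)
    ) / comb(N, n)

# Toy, hypothetical numbers: 20,000 annotated genes, 150 in a pathway,
# 86 DE genes, 5 of which fall in the pathway.
print(enrichment_pvalue(20000, 150, 86, 5))
```

A small p-value indicates the pathway holds more DE genes than chance would predict, which is the criterion behind the "significant enrichment" reported above.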
WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing
Self-supervised learning (SSL) achieves great success in speech recognition,
while limited exploration has been attempted for other speech processing tasks.
As speech signal contains multi-faceted information including speaker identity,
paralinguistics, spoken content, etc., learning universal representations for
all speech tasks is challenging. To tackle the problem, we propose a new
pre-trained model, WavLM, to solve full-stack downstream speech tasks. WavLM
jointly learns masked speech prediction and denoising in pre-training. In this
way, WavLM not only keeps the speech content modeling capability through
masked speech prediction, but also improves its potential for non-ASR tasks
through speech denoising. In addition, WavLM employs gated relative position bias
for the Transformer structure to better capture the sequence ordering of input
speech. We also scale up the training dataset from 60k hours to 94k hours.
WavLM Large achieves state-of-the-art performance on the SUPERB benchmark, and
brings significant improvements for various speech processing tasks on their
representative benchmarks. The code and pre-trained models are available at
https://aka.ms/wavlm. Submitted to the Journal of Selected Topics in Signal
Processing (JSTSP).
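The masked speech prediction objective relies on masking spans of input frames; a minimal sketch of span selection follows, with illustrative masking probability and span length rather than WavLM's actual configuration:

```python
import random

def mask_spans(num_frames, mask_prob=0.065, span_len=10, seed=0):
    """Pick frame indices to mask: each frame starts a masked span of
    span_len frames with probability mask_prob. The default values are
    illustrative assumptions, not WavLM's exact settings."""
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    masked = set()
    for t in range(num_frames):
        if rng.random() < mask_prob:
            masked.update(range(t, min(t + span_len, num_frames)))
    return sorted(masked)

print(mask_spans(100)[:10])  # the first few masked frame indices
```

During pre-training, the model would predict discrete targets only at these masked positions.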
Transcriptional Responses of Leptospira interrogans to Host Innate Immunity: Significant Changes in Metabolism, Oxygen Tolerance, and Outer Membrane
Leptospirosis is an important tropical disease worldwide, particularly in humid tropical and subtropical countries. As a major pathogen of this disease, Leptospira interrogans can be shed in the urine of reservoir hosts, survive in soil and water, and infect humans through broken skin or mucous membranes. Recently, the host adaptability and immune evasion of L. interrogans against host innate immunity have been partially elucidated in infection or animal models. A better understanding of the molecular mechanisms of L. interrogans in response to host innate immunity is required to learn the nature of early leptospirosis. This study focused on the transcriptome of L. interrogans during interaction with host immune cells. Significant changes in energy metabolism, oxygen tolerance, and the outer membrane protein profile were identified as potential immune evasion strategies used by pathogenic Leptospira during the early stage of infection. The major outer membrane proteins (OMPs) of L. interrogans may be regulated by the OmpR-specific transcription factor (LB333). These results provide a foundation for further studying the pathogenesis of leptospirosis, as well as for identifying gene regulatory networks in Leptospira spp.